Closed
Remove langchain and the associated pipeline to simplify our codebase and reduce our exposure to problems coming from third-party dependencies.
```python
return OllamaClient(
    base_url=self.config.inference_url,
    model=model_id,
    timeout=self._timeout,
)
```
OllamaRoleExplanationPipeline references nonexistent self._timeout attribute
Medium Severity
OllamaRoleExplanationPipeline.get_chat_model references self._timeout, but this class inherits from NopRoleExplanationPipeline → NopMetaData → MetaData, none of which initialize _timeout. Only OllamaMetaDataMixin sets that attribute. This will raise an AttributeError at runtime if get_chat_model is called. The old code didn't pass a timeout to OllamaLLM, so this is a regression introduced by the refactor.
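The inheritance chain described in this comment can be reduced to a minimal reproduction. The class names below come from the review comment, but the bodies are hypothetical, as is the suggested fix of mixing `OllamaMetaDataMixin` into the MRO:

```python
from types import SimpleNamespace


class MetaData:
    def __init__(self, config):
        self.config = config  # note: nothing on this chain sets _timeout


class NopMetaData(MetaData):
    pass


class NopRoleExplanationPipeline(NopMetaData):
    pass


class OllamaMetaDataMixin:
    """The only class in the hierarchy that initializes _timeout."""

    def __init__(self, config):
        super().__init__(config)
        self._timeout = getattr(config, "timeout", 30.0)


class OllamaRoleExplanationPipeline(NopRoleExplanationPipeline):
    def get_chat_model(self, model_id):
        # Raises AttributeError: _timeout was never initialized on this chain
        return {"model": model_id, "timeout": self._timeout}


# One possible fix (an assumption, not taken from the diff): put the mixin
# first in the bases so its __init__ runs and sets _timeout.
class FixedOllamaRoleExplanationPipeline(OllamaMetaDataMixin, NopRoleExplanationPipeline):
    def get_chat_model(self, model_id):
        return {"model": model_id, "timeout": self._timeout}
```

Because `OllamaMetaDataMixin.__init__` calls `super().__init__(config)`, the fixed class still runs the `MetaData` initializer before setting `_timeout`.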


Note: Medium Risk
Replaces the Ollama inference execution path (prompting, parsing, and timeouts) with a new custom HTTP client, which could change runtime behavior despite reducing dependency surface.
Overview
- Removes the entire `model_pipelines/langchain` implementation (configuration, pipeline logic, and tests) and unregisters it from `model_pipelines/__init__.py`.
- Refactors the `ollama` pipeline to no longer depend on LangChain: introduces a small `requests`-based `OllamaClient`, re-implements the completions/playbook/role/explanation pipelines with local prompt formatting and response-unwrapping helpers, and updates `OllamaConfiguration` to use the shared `BaseConfig`/`PipelineConfiguration` types.
- Cleans up packaging by dropping `langchain*` (and related transitive entries) from `pyproject.toml`, `requirements.txt`, and `uv.lock`, and removes `langchain/pipelines.py` from the `pyright` includes.

Written by Cursor Bugbot for commit 62919cf.
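The overview describes replacing LangChain's Ollama integration with a small `requests`-based client. A minimal sketch of what such a client might look like follows; the class name `OllamaClient` and the constructor arguments come from the diff, while the endpoint handling and method name are assumptions (Ollama's non-streaming `/api/generate` endpoint returns a single JSON object with a `response` field):

```python
import requests


class OllamaClient:
    """Hypothetical minimal HTTP client for an Ollama server."""

    def __init__(self, base_url: str, model: str, timeout: float = 30.0):
        self.base_url = base_url.rstrip("/")
        self.model = model
        self.timeout = timeout

    def generate(self, prompt: str) -> str:
        # stream=False asks Ollama for one complete JSON response
        resp = requests.post(
            f"{self.base_url}/api/generate",
            json={"model": self.model, "prompt": prompt, "stream": False},
            timeout=self.timeout,
        )
        resp.raise_for_status()
        return resp.json().get("response", "")
```

Passing `timeout` to `requests.post` is what replaces the timeout handling previously delegated to LangChain, which is why the `_timeout` regression flagged above matters at runtime.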